Table of Contents

Journal of Medical Signals and Sensors
Volume:13 Issue: 2, Apr-Jun 2023

  • Publication date: 1402/04/29
  • Number of articles: 13
  • Rasoul Sharifian, Behzad Nazari, Saeed Sadri, Peyman Adibi Pages 73-83
    Background and Objective

    The endoscopic diagnosis of pathological changes in the gastroesophageal junction, including esophagitis and Barrett's mucosa, is based on the visual detection of two boundaries: the mucosal color change between the esophagus and stomach, and the top endpoint of the gastric folds. The presence and pattern of mucosal breaks at the gastroesophageal mucosal junction (Z-line) classify esophagitis in patients, and the distance between the two boundaries points to possible columnar-lined epithelium. Since visual detection may suffer from intra- and interobserver variability, our objective was to define the boundaries automatically using image processing algorithms, which may enable us to measure the dimensions of these changes in future studies.

    Methods

    To demarcate the Z-line, the artifacts of the endoscopy images are first eliminated. In the second step, an initial contour for the Z-line is estimated using the SUSAN edge detector, the Mahalanobis distance criterion, and a Gabor filter bank. Using region-based active contours, this initial contour converges to the Z-line. Finally, by applying morphological operators and the Gabor filter bank to the region inside the Z-line, the gastric folds are segmented.
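
    The following minimal sketch (not the authors' implementation) illustrates the region-based active contour step: a rough initial mask is refined into a closed boundary with a morphological Chan-Vese contour from scikit-image. The circular initial level set and the stand-in test image are assumptions made purely for illustration.

      # Hedged sketch: refine a crude initial mask with a region-based active contour,
      # in the spirit of the Z-line step described above. The example image is a
      # placeholder, not an endoscopy frame from the paper's dataset.
      import numpy as np
      from skimage import color, img_as_float
      from skimage.data import astronaut                      # stand-in test image
      from skimage.segmentation import morphological_chan_vese, disk_level_set

      frame = img_as_float(color.rgb2gray(astronaut()))
      init = disk_level_set(frame.shape, radius=80)           # crude initial contour
      mask = morphological_chan_vese(frame, 150, init_level_set=init, smoothing=2)
      print("segmented pixels:", int(mask.sum()))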

    Results

    To evaluate the results, a database consisting of 50 images and their ground truths was collected. The average Dice coefficient and mean square error of the Z-line segmentation were 0.93 and 3.3, respectively, and the average boundary distance criterion was 12.3 pixels. In addition, two other criteria that compare the segmentation of the folds with several ground truths, i.e., Sweet-Spot Coverage and the Jaccard Index for the Golden Standard, were 0.90 and 0.84, respectively.
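
    For readers unfamiliar with the overlap metric quoted above, the following toy computation shows how a Dice coefficient is obtained from two binary masks; the arrays are made up for demonstration only.

      # Dice coefficient between an "automatic" and a "manual" binary mask (toy data).
      import numpy as np

      auto = np.zeros((10, 10), bool); auto[2:8, 2:8] = True        # automatic mask
      manual = np.zeros((10, 10), bool); manual[3:9, 3:9] = True    # ground truth
      dice = 2 * np.logical_and(auto, manual).sum() / (auto.sum() + manual.sum())
      print(f"Dice = {dice:.2f}")    # 1.0 would mean perfect overlap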

    Conclusions

    Considering the results, the automatic segmentation of the Z-line and gastric folds matches the ground truths with appropriate accuracy.

    Keywords: Adenocarcinoma, Barrett’s esophagus, demarcating Z‑line, gastric folds boundary, segmentation of lower esophageal sphincter endoscopy images
  • Zahra Shaterian Pages 84-91
    Background

    This research focuses on the design of highly sensitive microfluidic sensors for applications in liquid dielectric characterization, including biomedical samples.

    Methods

    Considering the narrow-band operation of microfluidic sensors based on microwave resonators, this study proposes microfluidic sensors based on the variation of transmission phase in microwave transmission lines (TLs). It is shown that, among the different microwave TLs, slot-lines are an appropriate type of TL for sensing applications because a major portion of the electromagnetic (EM) field passes above the line, where a microfluidic channel can easily be placed.

    Results

    The proposed concept is presented and the functionality of the proposed sensor is validated through full-wave EM simulations. Moreover, the effects of the dimensions of the microfluidic channel and the thickness of the substrate on the sensitivity of the sensor are studied. Furthermore, taking advantage of differential circuits and systems, a differential version of the microfluidic sensor is also presented. It is shown that the sensitivity of the sensor can be adjusted according to the application; specifically, the sensitivity of the proposed microfluidic sensor is almost linearly proportional to the length of the channel, i.e., the sensitivity can be doubled by doubling the channel length.
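
    The roughly linear dependence of sensitivity on channel length can be motivated by the elementary phase relation of a transmission line, φ = 2πfL√ε_eff/c. The short sketch below uses this simplified model with assumed permittivities and an assumed operating frequency; it is not the paper's full-wave analysis.

      # Back-of-the-envelope model: the phase shift caused by a permittivity change
      # grows linearly with the line length L, consistent with the doubling behaviour
      # noted above. Frequency and effective permittivities are assumed values.
      import numpy as np

      c, f = 3e8, 5e9                        # speed of light, assumed 5 GHz operation
      eps_ref, eps_sample = 2.0, 2.3         # assumed effective permittivities
      for L in (5e-3, 10e-3):                # 5 mm vs 10 mm channel length
          dphi = 2 * np.pi * f * L / c * (np.sqrt(eps_sample) - np.sqrt(eps_ref))
          print(f"L = {L*1e3:.0f} mm -> delta phase = {np.degrees(dphi):.1f} deg")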

    Conclusions

    This research shows that, using slot-line TLs, highly sensitive microfluidic sensors can be designed for applications in liquid dielectric characterization, especially for biomedical samples where small variations of permittivity have to be detected.

    Keywords: Differential sensor, microfluidic sensor, slot‑line
  • Reza Alizadeh Eghtedar, Mahdad Esmaeili, Alireza Peyman, Mohammadreza Akhlaghi, Seyed Hossein Rasta Pages 92-100
    Background

    Automatic segmentation of the choroid in optical coherence tomography (OCT) images helps ophthalmologists in diagnosing eye pathologies. Compared to manual segmentation, it is faster and is not affected by human errors. However, the large speckle noise in OCT images limits their automatic segmentation and interpretation. To solve this problem, a new curvelet transform-based K-SVD method is proposed in this study. Furthermore, the dataset was manually segmented by a retinal ophthalmologist to allow comparison with the proposed automatic segmentation technique.

    Methods

    In this study, curvelet transform-based K-SVD dictionary learning and the Lucy-Richardson algorithm were used to remove speckle noise from the OCT images. The outer and inner choroidal boundaries (O/ICB) were then determined using graph theory, and the area between them was considered the choroidal region.
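
    A hedged sketch of graph-based boundary tracing in the spirit of the method above: the vertical gradient of a B-scan is converted into a cost map and the minimum-cost left-to-right path is taken as a layer boundary. The synthetic image and the cost weighting are illustrative assumptions, not the paper's exact formulation.

      # Minimum-cost path across a gradient-derived cost map as a layer boundary.
      import numpy as np
      from skimage.graph import route_through_array

      h, w = 100, 200
      bscan = np.zeros((h, w))
      bscan[60:, :] = 1.0                                # synthetic bright layer below row 60
      grad = np.abs(np.diff(bscan, axis=0, prepend=0))   # boundary rows have large gradient
      cost = 1.0 / (grad + 1e-3)                         # strong edges -> cheap to traverse
      path, _ = route_through_array(cost, (60, 0), (60, w - 1), fully_connected=True)
      rows = [r for r, c in path]
      print("traced boundary row (median):", int(np.median(rows)))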

    Results

    The proposed method was evaluated on our dataset, and the average Dice similarity coefficient (DSC) between the automatically and manually segmented regions was 92.14% ± 3.30%. Moreover, applying the most recently published open-source algorithm by Mazzaferri et al. to our dataset yielded a mean DSC of 55.75% ± 14.54%.

    Conclusions

    A significant similarity was observed between the automatic and manual segmentations. Automatic segmentation of the choroidal layer could also be utilized in large-scale quantitative studies of the choroid.

    Keywords: Choroidal segmentation, curvelet transform, graph theory, image processing, optical coherence tomography
  • Parisa Gifani, Majid Vafaeezadeh, Mahdi Ghorbani, Ghazal Mehri-Kakavand, Mohamad Pursamimi, Ahmad Shalbaf, Amirhossein Abbaskhani Davanloo Pages 101-109
    Background

    Diagnosing the stage of COVID-19 patients from chest computed tomography (CT) can help the physician in making decisions on the length of hospitalization required and the adequate selection of patient care. This diagnosis requires highly experienced radiologists, who are not available everywhere, and is also tedious and subjective. The aim of this study is to propose an advanced machine learning system to diagnose the stages of COVID-19 patients, including the normal, early, progressive, peak, and absorption stages, based on lung CT images, using an automatic deep transfer learning ensemble.

    Methods

    Different deep transfer learning strategies based on pretrained convolutional neural networks (CNNs) were used. The pretrained CNNs were fine-tuned on the chest CT images, and the extracted features were then classified by a softmax layer. Finally, we built an ensemble based on majority voting of the best deep transfer learning outputs to further improve the recognition performance.
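
    The following sketch shows the general recipe (pretrained backbone, new softmax head, then majority voting across models) rather than the paper's exact configuration; the backbone choice, input size, and five-class head are illustrative assumptions.

      # Hedged sketch: fine-tunable pretrained backbone with a new softmax head,
      # plus a majority-vote combiner over several such models' predicted labels.
      import numpy as np
      import tensorflow as tf

      def build_stage_classifier(n_classes=5, img_size=224):
          base = tf.keras.applications.EfficientNetB4(
              include_top=False, weights="imagenet",
              input_shape=(img_size, img_size, 3), pooling="avg")
          out = tf.keras.layers.Dense(n_classes, activation="softmax")(base.output)
          model = tf.keras.Model(base.input, out)
          model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                        metrics=["accuracy"])
          return model                      # fine-tune with model.fit(...) on labelled CT slices

      def majority_vote(label_predictions):
          # label_predictions: list of per-model label arrays, each of shape [n_samples]
          stacked = np.stack(label_predictions)
          return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, stacked)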

    Results

    The experimental results from 689 cases indicate that the ensemble of three deep transfer learning outputs based on EfficientNetB4, InceptionResV3, and NASNetLarge achieves the highest performance in diagnosing the stage of COVID-19, with an accuracy of 91.66%.

    Conclusion

    The proposed method can be used for the classification of the stage of COVID‑19 disease with good accuracy to help the physician in making decisions on patient care.

    Keywords: Computed tomography, convolutional neural network, ensemble, stage of COVID‑19, transfer learning
  • Mohammad Amiri, Manizheh Ranjbar, Gholamreza Fallah Mohammadi Pages 110-117
    Background

    Lung computed tomography (CT) scans contain valuable information and patterns that make early diagnosis of COVID-19, a global pandemic, possible through image processing software. In this research, software based on deep learning has been designed that can be used clinically to diagnose COVID-19 with high accuracy.

    Methods

    A convolutional neural network architecture was developed based on Inception-V3 for deep learning of lung image patterns, feature extraction, and image classification. Transfer learning was utilized to increase the learning power of the system, and changes were applied to the network layers to increase the detection power. The learning process was repeated 30 times. All diagnostic statistical parameters were analyzed to validate the software.

    Results

    Based on the data of Imam Khomeini Hospital in Sari, the validity, sensitivity, and accuracy of the software in distinguishing patients affected by COVID-19 from those not affected were 98%, 98%, and 98%, respectively. On some data, the diagnostic statistical parameters reached 100%. The modified Inception-V3 algorithm applied to heterogeneous data also had acceptable precision.

    Conclusion

    The basic Inception-V3 architecture utilized in this research has acceptable speed and accuracy in learning CT scan images of patients' lungs and diagnosing COVID-19 pneumonia, and can be utilized clinically as a powerful diagnostic tool.

    Keywords: Computed tomography image, COVID‑19 pneumonia, deep learning, machine learning
  • Behrooz Ghane, Alireza Karimian, Samaneh Mostafapour, Faezeh Gholamiankhak, Seyedjafar Shojaerazavi, Hossein Arabi Pages 118-128
    Background

    Computed tomography (CT) is one of the main tools for diagnosing and grading COVID-19 progression. To avoid the side effects of CT imaging, low-dose CT imaging is of crucial importance to reduce the population absorbed dose. However, this approach introduces considerable noise into CT images.

    Methods

    In this light, we simulated four reduced dose levels (60%, 40%, 20%, and 10% of the standard dose) of CT imaging using the Beer–Lambert law across 49 patients infected with COVID-19. Three denoising filters, namely Gaussian, bilateral, and median, were then applied to the different low-dose CT images, whose quality was assessed before and after filtering by calculating the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), structural similarity index measure (SSIM), and relative CT-value bias, separately for the lung tissue and the whole body.
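
    As a rough illustration of this evaluation pipeline, the sketch below applies the three named filters to a synthetic noisy slice and scores them with RMSE, PSNR, and SSIM; the test image and filter parameters are placeholders, not the study's calibrated settings.

      # Apply Gaussian, bilateral, and median filters to a noisy stand-in slice and
      # score each result against the clean reference (toy data, assumed parameters).
      import numpy as np
      import cv2
      from skimage.metrics import peak_signal_noise_ratio, structural_similarity

      x = np.linspace(0, 1, 128, dtype=np.float32)
      reference = np.outer(x, x)                                   # smooth synthetic "full-dose" slice
      rng = np.random.default_rng(0)
      low_dose = (reference + rng.normal(0, 0.05, reference.shape)).astype(np.float32)

      filtered = {
          "gaussian": cv2.GaussianBlur(low_dose, (5, 5), 1.0),
          "bilateral": cv2.bilateralFilter(low_dose, 5, 0.2, 5),
          "median": cv2.medianBlur(low_dose, 5),
      }
      for name, img in filtered.items():
          rmse = float(np.sqrt(np.mean((img - reference) ** 2)))
          psnr = peak_signal_noise_ratio(reference, img, data_range=1.0)
          ssim = structural_similarity(reference, img, data_range=1.0)
          print(f"{name:9s}  RMSE={rmse:.3f}  PSNR={psnr:.1f} dB  SSIM={ssim:.3f}")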

    Results

    The quantitative evaluation indicated that the 10%-dose CT images have inferior quality (RMSE = 322.1 ± 104.0 HU and bias = 11.44% ± 4.49% in the lung) even after application of the denoising filters. The bilateral filter exhibited superior performance in suppressing the noise and recovering the underlying signals in low-dose CT images compared to the other denoising techniques. For the 20%-dose CT images, the bilateral filter led to an RMSE and bias of 100.21 ± 16.47 HU and −0.21% ± 1.20%, respectively, in the lung regions, compared to the Gaussian filter (RMSE = 103.46 ± 15.70 HU, bias = 1.02% ± 1.68%) and the median filter (RMSE = 129.60 ± 18.09 HU, bias = −6.15% ± 2.24%).

    Conclusions

    The 20%-dose CT imaging followed by bilateral filtering offered a reasonable compromise between image quality and patient dose reduction.

    Keywords: COVID-19, denoising filters, image quality, low-dose computed tomography, patient dose
  • Seyed Salman Zakariaee, Hossein Salmanipour, MohammadReza Kaffashian Pages 129-135
    Background

    There is a significant discrepancy between the results of previous studies regarding the diagnostic efficacy of chest computed tomography (CT) for coronavirus disease 2019 (COVID-19). We aimed to evaluate the diagnostic efficacy of chest CT for COVID-19.

    Methods

    Suspected cases of COVID‑19 with fever, cough, dyspnea, and evidence of pneumonia on chest CT scan were enrolled in the study. The accuracy, sensitivity, and specificity of chest CT were determined according to real‑time reverse transcriptase‑polymerase chain reaction (RT‑PCR) results as the gold standard method.
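
    The reported measures all follow from a 2x2 table of CT findings against RT-PCR. The worked example below uses hypothetical counts (not the study data) to show how each quantity is derived.

      # Diagnostic metrics from a 2x2 table (CT result vs. RT-PCR reference).
      # The counts are toy numbers chosen for illustration only.
      tp, fp, fn, tn = 240, 43, 27, 12

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      accuracy    = (tp + tn) / (tp + fp + fn + tn)
      ppv         = tp / (tp + fp)
      npv         = tn / (tn + fn)
      odds_ratio  = (tp * tn) / (fp * fn)
      print(f"Se={sensitivity:.1%} Sp={specificity:.1%} Acc={accuracy:.1%} "
            f"PPV={ppv:.1%} NPV={npv:.1%} OR={odds_ratio:.2f}")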

    Results

    The study population comprised 356 suspected cases of COVID-19 (174 men and 182 women; age range, 3–96 years; mean age ± standard deviation, 55.21 ± 18.38 years). COVID-19 patients were diagnosed using chest CT with 89.8% sensitivity, 78.1% accuracy, 21.3% specificity, 84.7% positive predictive value, and 30.23% negative predictive value. The odds ratio was 2.39 (95% confidence interval, 1.16–4.91). Typical CT manifestations of COVID-19 were observed in 48 (13.5%) patients with negative RT-PCR results, whereas 30 (8.4%) patients with confirmed positive RT-PCR results had no radiological manifestations. The kappa coefficient of chest CT for the diagnosis of COVID-19 was 0.78.

    Conclusion

    The results show that when RT-PCR results are negative, chest CT could be considered a complementary diagnostic method for COVID-19 patients. A more comprehensive diagnostic approach could be established by combining the chest CT examination, clinical symptoms, and the RT-PCR assay.

    Keywords: Coronavirus disease 2019, pneumonia, severe acute respiratory syndrome coronavirus 2, X-ray computed tomography
  • Hossein Marofi, Adele Rafati, Pooria Gill Pages 136-143
    Background

    We describe here an aptamer-based magnetic nanoprobe for measuring the amount of chloramphenicol (CAP) in milk.

    Methods

    The nanoprobe used in this method consists of a magnetic nanoparticle conjugated to a specific CAP aptamer. If the target is present in the sample, the nanoprobe binds to it and the aptamer forms a G-quadruplex structure. This structure mimics peroxidase activity in the presence of the hemin cofactor. If tetramethylbenzidine is added to the sample containing the nanoprobe, a blue color is observed. After adding a stop reagent solution, the color produced is measured by a microplate reader and a portable meter.

    Results

    This study demonstrates a 99% positive linear correlation between the microplate reader's results and the portable meter's results.

    Conclusion

    Conjugation of the aptamer to magnetic nanoparticles and applying magnetic separation operations change the nanoprobe performance by 11% for both mentioned devices.

    Keywords: Chloramphenicol, G‑quadruplex aptamer, MNP, Portable meter
  • Nahid Shami, Maryam Atarod, Parvaneh Shokrani, Nadia Najafizade Pages 144-152
    Background

    This study aimed to optimize efficiency in Monte Carlo (MC) simulation using sensitivity analysis of a beam model.

    Methods

    A BEAMnrc-based model of the 6 MV beam of a Siemens Primus linac was developed. For the sensitivity analysis, the effect of the electron source, treatment head, and virtual phantom specifications on the calculated percent depth dose (PDD) and lateral dose profiles was evaluated.

    Results

    The optimum mean energy (E) and full width at half maximum (FWHM) of the intensity distribution of the electron beam were calculated as 6.7 MeV and 3 mm, respectively. Increasing E from 6.1 to 6.7 MeV increased the PDD in the fall-off region by 4.70% and decreased the lateral profile by 8.76%. Changing the FWHM had a significant effect on the buildup region of the PDD and on the horns and out-of-field regions of the lateral profile. Increasing the collimator opening by 0.5 mm increased the PDD by 2.13% and decreased the central and penumbra regions of the profiles by 1.98% and 11.40%, respectively. Collimator properties such as thickness and density were effective in changing the penumbra (11.32% for a 0.25 cm increment) and out-of-field (22.82% for 3 g/cm³) regions of the lateral profiles.
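
    As a toy illustration of how such sensitivity figures are obtained, the snippet below computes the mean relative change of a depth-dose curve in a chosen region when a beam parameter is varied; the two curves are synthetic stand-ins for BEAMnrc-calculated PDDs.

      # Mean relative change of a depth-dose curve in the fall-off region between a
      # baseline and a perturbed beam model (synthetic curves, assumed attenuation).
      import numpy as np

      depth = np.linspace(0, 30, 301)                                    # cm
      pdd_ref  = 100 * np.exp(-0.045 * np.clip(depth - 1.5, 0, None))    # assumed baseline
      pdd_test = 100 * np.exp(-0.043 * np.clip(depth - 1.5, 0, None))    # "higher energy"
      falloff = depth > 10                                               # region of interest
      change = np.mean((pdd_test[falloff] - pdd_ref[falloff]) / pdd_ref[falloff]) * 100
      print(f"mean PDD change beyond 10 cm: {change:+.2f}%")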

    Conclusion

    Analysis of a 6 MV model showed that PDD profiles were more sensitive to changes in energy than to FWHM of the electron source. The lateral profiles were sensitive to E, FWHM, and collimator opening. The density of the collimator affected only the out‑of‑field region of lateral profiles. The findings of this study may be used to make benchmarking of an MC beam model more efficient.

    Keywords: Benchmarking, gamma analysis, Monte Carlo, radiotherapy, Siemens Primus
  • Marina Gomez-Hernández, Natali Olaya-Mira, Carolina Viloria-Barragán, Julieta Henao-Pérez, Jessica María Rojas-Mora, Gloria Díaz-Londoño Pages 153-159
    Background

    Multiple sclerosis (MS) is a progressive and neurodegenerative disease of the central nervous system. Its symptoms vary greatly, which makes its diagnosis complex, expensive, and time‑consuming. One of its most prevalent symptoms is muscle fatigue. It occurs in about 92% of patients with MS (PwMS) and is defined as a decrease in maximal strength or energy production in response to contractile activity. This article aims to compare the behavior of a healthy control (HC) with that of a patient with MS before and after muscle fatigue.

    Methods

    For this purpose, a static baropodometric test and a dynamic electromyographic analysis were performed to calculate the area of the stabilometric ellipse, the root mean square (RMS) value, and the sample entropy (SampEn) of the signals, as a proof of concept exploring the feasibility of this test for the quantitative analysis of muscle fatigue; in addition, a statistical analysis was performed to verify the results.
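
    A minimal sketch of the two signal measures named above, computed on a synthetic EMG epoch, is given below; the parameters m = 2 and r = 0.2 × SD are common defaults and are assumptions here, not necessarily the values used in the study.

      # RMS value and sample entropy of a stand-in EMG epoch (synthetic data).
      import numpy as np

      def rms(x):
          return float(np.sqrt(np.mean(np.square(x))))

      def sample_entropy(x, m=2, r_factor=0.2):
          x = np.asarray(x, float)
          r = r_factor * x.std()
          def count_matches(mm):
              templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
              d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
              return np.sum(d <= r) - len(templates)       # exclude self-matches
          B, A = count_matches(m), count_matches(m + 1)
          return float(-np.log(A / B)) if A > 0 and B > 0 else float("inf")

      rng = np.random.default_rng(1)
      emg = rng.normal(0, 1, 1000)                          # stand-in EMG epoch
      print(f"RMS = {rms(emg):.3f}, SampEn = {sample_entropy(emg):.3f}")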

    Results

    According to the results, the ellipse area increased in the presence of muscle fatigue, indicating a decrease in postural stability. Likewise, with muscle fatigue the RMS value increased in the MS patient and decreased in the HC subject, while the SampEn showed the opposite behavior.

    Conclusion

    Thus, this study demonstrates that SampEn is a viable parameter to estimate muscle fatigue in PwMS and other neuromuscular diseases.

    Keywords: Baropodometry, electromyography, multiple sclerosis, muscle fatigue, sample entropy
  • Fateme Vahabi, Saeed Kermani, Zahra Vahabi, Nader Pestechian Pages 160-164

    Automating the camera lucida method, which is a standard way of focusing microscopic images, is a challenging task for many scientists. Combining hardware and software to automate microscopic imaging systems is therefore one of the most important issues in the field of medicine as well. This approach reduces scanning time and increases the accuracy of the user's results. In the hardware part, a closed-loop control system has been designed and implemented to move the stage within predefined limits of 15°. This system produces 50 consecutive images of parasites at the mentioned spatial distances in the two directions of the z-axis. Then, by combining the images with our proposed software, a high-contrast image can be produced. This colored image is in focus over many subparts of the sample, even those with different ruggedness. After implementing the closed-loop controller, the stage movement was repeated eight times with an average step displacement of 20 μm, measured in the two directions of the z-axis by a digital micrometer. On average, the movement error was 1 μm. In the software, the edge intensity energy index was calculated for image quality evaluation. The standard camera lucida method was simulated with acceptable results based on experts' opinions and on mean squared error parameters. The mechanical movement of the stage has an accuracy of about 95%, which meets the expectations of laboratory users, and the focused colored images output by our combining software can replace those of the traditional, widely accepted camera lucida method.
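
    The software side described above can be pictured with the following hedged sketch: an edge-intensity-energy focus measure is computed for each image in a z-stack, and the sharpest slice is selected per pixel to fuse an all-in-focus image; the random synthetic stack stands in for the 50 captured frames.

      # Edge-energy focus measure per slice and per-pixel "sharpest slice" fusion.
      import numpy as np
      from scipy import ndimage

      def edge_energy(img):
          gx, gy = np.gradient(img.astype(float))
          return gx**2 + gy**2                        # per-pixel edge intensity energy

      rng = np.random.default_rng(0)
      stack = rng.uniform(0, 1, (50, 64, 64))         # stand-in for 50 z-slices
      sharpness = np.stack([ndimage.uniform_filter(edge_energy(s), 9) for s in stack])
      best = np.argmax(sharpness, axis=0)             # sharpest slice index per pixel
      fused = np.take_along_axis(stack, best[None, ...], axis=0)[0]
      print("fused image shape:", fused.shape)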

    Keywords: Automatic focusing, camera Lucida, closed-loop control, edge intensity energy, optical microscope, stack of images
  • Amir Bazdar, Amir Hatamian, Javad Ostadieh, Javad Nourinia, Changiz Ghobadi, Ehsan Mostafapour Pages 165-172

    Magnetic resonance imaging (MRI) has long been used to detect brain diseases, and many useful techniques have been developed for this task. However, there is still potential to further improve the classification of brain diseases and increase confidence in the results. In this research we present, for the first time, a nonlinear feature extraction method applied to the MRI sub-images obtained from three levels of the two-dimensional dual-tree complex wavelet transform (2D DT-CWT) in order to classify multiple brain diseases. After extracting the nonlinear features from the sub-images, we used the spectral regression discriminant analysis (SRDA) algorithm to reduce the number of classifying features. Instead of using deep neural networks, which are computationally expensive, we propose a hybrid RBF network that uses the k-means and recursive least squares (RLS) algorithms simultaneously in its structure for classification. To evaluate the performance of RBF networks with hybrid learning algorithms, we classify nine brain diseases based on MRI processing using these networks and compare the results with previously presented classifiers, including support vector machines (SVM) and K-nearest neighbors (KNN). Comprehensive comparisons are made with recently proposed approaches by extracting various types and numbers of features. Our aim in this paper is to reduce the complexity and improve the classification results with the hybrid RBF classifier, and the results showed 100% classification accuracy in both the two-class case and the multi-class classification of brain diseases into 8 and 10 classes. In this paper we provide a computationally light and precise method for brain MRI disease classification; the results show that the proposed method is not only accurate but also computationally reasonable.
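
    A minimal sketch of the hybrid classifier idea, under the stated assumptions of toy two-class data, 20 k-means centres, and an assumed Gaussian kernel width, is given below; the output weights are fitted with a recursive least squares (RLS) loop, echoing but not reproducing the paper's scheme.

      # RBF classifier: centres from k-means, output weights fitted by RLS (toy data).
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_moons

      X, y = make_moons(400, noise=0.2, random_state=0)
      Y = np.eye(2)[y]                                          # one-hot targets

      centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
      sigma = 0.5                                               # assumed kernel width
      def hidden(X):
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * sigma**2))                   # RBF activations

      H = hidden(X)
      W = np.zeros((H.shape[1], 2))                             # output weights
      P = np.eye(H.shape[1]) * 1e3                              # inverse correlation matrix
      for h, t in zip(H, Y):                                    # RLS update, sample by sample
          h = h[:, None]
          k = P @ h / (1.0 + h.T @ P @ h)
          W += k @ (t[None, :] - h.T @ W)
          P -= k @ h.T @ P

      preds = (hidden(X) @ W).argmax(1)
      print(f"training accuracy on toy data: {(preds == y).mean():.2%}")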

    Keywords: Brain magnetic resonance imaging classification, feature reduction, k‑means algorithm, nonlinear features, radial basis function networks
  • GS Shashi Kumar, Niranjana Sampathila, Roshan Joy Martis Pages 173-182

    Recognition of human emotional states for affective computing based on the electroencephalogram (EEG) signal is an active yet challenging domain of research. In this study we propose an emotion recognition framework based on the two-dimensional valence-arousal model to classify High Arousal-Positive Valence (happy) and Low Arousal-Negative Valence (sad) emotions. In total, 34 features from the time, frequency, statistical, and nonlinear domains are studied for their efficacy using an artificial neural network (ANN). The EEG signals from electrodes in different scalp regions, viz. frontal, parietal, temporal, and occipital, are studied for performance. It is found that the ANN trained using features extracted from the frontal region outperformed all other regions with an accuracy of 93.25%. The results indicate that a smaller set of electrodes can be used for emotion recognition, which can simplify the acquisition and processing of EEG data. The developed system can greatly aid physicians in clinical practice involving emotional states, in continuous monitoring, and in the development of wearable sensors for emotion recognition.
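
    The overall pipeline can be pictured with the hedged sketch below: band-power features extracted from EEG epochs feed a small neural network for two-class recognition; the synthetic signals, chosen bands, and network size are illustrative assumptions rather than the study's 34-feature set.

      # Band-power features from (synthetic) EEG epochs classified with a small ANN.
      import numpy as np
      from scipy.signal import welch
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      fs, n_epochs, n_samples = 128, 200, 512
      rng = np.random.default_rng(0)
      epochs = rng.normal(0, 1, (n_epochs, n_samples))           # stand-in EEG epochs
      labels = rng.integers(0, 2, n_epochs)                      # 0 = sad, 1 = happy

      bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
      def band_powers(sig):
          f, pxx = welch(sig, fs=fs, nperseg=256)
          return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands.values()]

      X = np.array([band_powers(e) for e in epochs])
      Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xtr, ytr)
      print(f"test accuracy on synthetic data: {clf.score(Xte, yte):.2f}")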

    Keywords: Artificial neural network, electroencephalogram, emotion recognition, valence-arousal model